Conditional variational models, using either continuous or discrete latent variables, are powerful for open-domain dialogue response generation. However, previous works have shown that continuous latent variables tend to reduce the coherence of generated responses. In this paper, we also find that discrete latent variables have difficulty capturing more diverse expressions. To tackle these problems, we combine the merits of both continuous and discrete latent variables and propose a Hybrid Latent Variable (HLV) method. Specifically, HLV constrains the global semantics of responses through discrete latent variables and enriches responses with continuous latent variables. Thus, we diversify the generated responses while maintaining relevance and coherence. In addition, we propose the Conditional Hybrid Variational Transformer (CHVT) to construct and utilize HLV with transformers for dialogue generation. Through fine-grained symbolic-level semantic information and additive Gaussian mixing, we construct the distribution of continuous variables, prompting the generation of diverse expressions. Meanwhile, to maintain relevance and coherence, the discrete latent variable is optimized by self-separation training. Experimental results on two dialogue generation datasets (DailyDialog and Opensubtitles) show that CHVT is superior to traditional transformer-based variational mechanisms w.r.t. diversity, relevance, and coherence metrics. Moreover, we also demonstrate the benefit of applying HLV to fine-tune two pre-trained dialogue models (PLATO and BART-base).
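The abstract gives no implementation details; as a rough sketch of how a hybrid discrete-plus-continuous latent variable could be assembled (the Gumbel-softmax relaxation and all names below are our assumptions, not necessarily what CHVT does):

```python
# Hypothetical sketch of a hybrid latent variable (HLV) layer: a discrete
# latent (relaxed with Gumbel-softmax) for global semantics plus a Gaussian
# continuous latent for fine-grained variation. Names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridLatent(nn.Module):
    def __init__(self, hidden: int, n_classes: int = 20, z_dim: int = 64):
        super().__init__()
        self.cls_logits = nn.Linear(hidden, n_classes)   # discrete latent
        self.mu = nn.Linear(hidden, z_dim)               # continuous latent
        self.logvar = nn.Linear(hidden, z_dim)
        self.merge = nn.Linear(n_classes + z_dim, hidden)

    def forward(self, h: torch.Tensor, tau: float = 1.0):
        logits = self.cls_logits(h)
        c = F.gumbel_softmax(logits, tau=tau, hard=False)  # relaxed discrete sample
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return self.merge(torch.cat([c, z], dim=-1)), kl

h = torch.randn(4, 256)                 # e.g., encoder states for 4 dialogues
out, kl = HybridLatent(256)(h)
print(out.shape, kl.item())
```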
Complex dialogue mappings (CDM), including one-to-many and many-to-one mappings, tend to make dialogue models generate incoherent or dull responses, and modeling these mappings remains a huge challenge for neural dialogue systems. To alleviate these problems, methods such as introducing external information, reconstructing the optimization function, and manipulating data samples have been proposed; however, they primarily focus on avoiding training with CDM, which inevitably weakens the model's ability to understand CDM in human conversations and limits further improvements in model performance. This paper proposes a Sentence Semantic \textbf{Seg}mentation guided \textbf{C}onditional \textbf{V}ariational \textbf{A}uto-\textbf{E}ncoder (SegCVAE) method, which can model and take advantage of CDM data. Specifically, to tackle the incoherence problem caused by one-to-many mappings, SegCVAE uses response-related prominent semantics to constrain the latent variable. To mitigate the lack of diversity brought by many-to-one mappings, SegCVAE segments multiple prominent semantics to enrich the latent variables. Three novel components, Internal Separation, External Guidance, and Semantic Norms, are proposed to realize SegCVAE. On dialogue generation tasks, both automatic and human evaluation results show that SegCVAE achieves new state-of-the-art performance.
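For orientation, a minimal sketch of the CVAE ingredient SegCVAE builds on: the KL term between a recognition posterior and a prior that could be conditioned on the extracted prominent semantics (this conditioning scheme is our simplification, not the paper's exact formulation):

```python
# Minimal CVAE-style objective piece: KL between a recognition network
# q(z | context, response) and a prior p(z | context, semantics). Conditioning
# the prior on "prominent semantics" loosely mimics SegCVAE's idea of
# constraining the latent variable; the details here are assumed.
import torch

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians."""
    return 0.5 * torch.sum(
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
        - 1.0,
        dim=-1,
    )

mu_q, logvar_q = torch.randn(8, 64), torch.zeros(8, 64)  # recognition net output
mu_p, logvar_p = torch.randn(8, 64), torch.zeros(8, 64)  # semantics-conditioned prior
print(gaussian_kl(mu_q, logvar_q, mu_p, logvar_p).mean())
```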
Vision transformers (ViTs) encoding an image as a sequence of patches bring new paradigms for semantic segmentation. We present an efficient framework of representation separation at the local-patch and global-region levels for semantic segmentation with ViTs. It targets the peculiar over-smoothness of ViTs in semantic segmentation, and therefore differs from current popular paradigms of context modeling and from most existing related methods, which reinforce the advantage of attention. We first present a decoupled two-pathway network in which a second pathway enhances and passes down local-patch discrepancies complementary to the global representations of transformers. We then propose a spatially adaptive separation module to obtain more separated deep representations, and a discriminative cross-attention that yields more discriminative region representations through novel auxiliary supervision. The proposed methods achieve some impressive results: 1) incorporated with large-scale plain ViTs, our methods achieve new state-of-the-art performance on five widely used benchmarks; 2) using masked pre-trained plain ViTs, we achieve 68.9% mIoU on Pascal Context, setting a new record; 3) pyramid ViTs integrated with the decoupled two-pathway network even surpass well-designed high-resolution ViTs on Cityscapes; 4) the improved representations produced by our framework transfer favorably to images with natural corruptions. The code will be released publicly.
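A schematic of the decoupled two-pathway idea, assuming a ViT global pathway plus a lightweight convolutional local pathway fused before classification (module shapes and names are illustrative, not the paper's):

```python
# Illustrative two-pathway head: a global pathway from ViT features and a
# lightweight local pathway intended to preserve patch-level discrepancy.
# This is a schematic of the decoupled design, not the paper's exact modules.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoPathwayHead(nn.Module):
    def __init__(self, dim: int = 256, n_classes: int = 60):
        super().__init__()
        self.local = nn.Sequential(   # local-patch pathway (high-frequency detail)
            nn.Conv2d(3, dim, 3, stride=4, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, padding=1),
        )
        self.cls = nn.Conv2d(dim, n_classes, 1)

    def forward(self, image, vit_feat):
        # vit_feat: (B, dim, H/16, W/16) global representation from the ViT
        g = F.interpolate(vit_feat, scale_factor=4, mode="bilinear",
                          align_corners=False)
        l = self.local(image)         # (B, dim, H/4, W/4)
        return self.cls(g + l)        # fuse global and local pathways

img = torch.randn(2, 3, 256, 256)
feat = torch.randn(2, 256, 16, 16)
print(TwoPathwayHead()(img, feat).shape)   # (2, 60, 64, 64)
```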
In this paper, we propose PanoViT, a panorama vision transformer to estimate the room layout from a single panoramic image. Compared to CNN models, our PanoViT is more proficient in learning global information from the panoramic image for the estimation of complex room layouts. Considering the difference between a perspective image and an equirectangular image, we design a novel recurrent position embedding and a patch sampling method for the processing of panoramic images. In addition to extracting global information, PanoViT also includes a frequency-domain edge enhancement module and a 3D loss to extract local geometric features in a panoramic image. Experimental results on several datasets demonstrate that our method outperforms state-of-the-art solutions in room layout prediction accuracy.
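As a hedged illustration of why equirectangular inputs need special patch handling, the toy helper below pads the panorama circularly along the width before patchifying, so the left and right edges stay contiguous; PanoViT's actual sampling scheme is not specified here:

```python
# Hypothetical panorama-aware patchifier: an equirectangular image wraps
# around horizontally, so we pad it circularly before splitting into patches.
# This is our own simplification, not PanoViT's published method.
import torch

def wraparound_patches(pano: torch.Tensor, patch: int = 16) -> torch.Tensor:
    """pano: (B, C, H, W) equirectangular image -> (B, N, C*patch*patch)."""
    B, C, H, W = pano.shape
    pad = patch // 2
    # circular padding on the width axis keeps left/right edges contiguous
    pano = torch.cat([pano[..., -pad:], pano, pano[..., :pad]], dim=-1)
    patches = pano.unfold(2, patch, patch).unfold(3, patch, patch)  # (B,C,h,w,p,p)
    patches = patches.permute(0, 2, 3, 1, 4, 5)                     # (B,h,w,C,p,p)
    return patches.reshape(B, -1, C * patch * patch)

x = torch.randn(1, 3, 256, 512)
print(wraparound_patches(x).shape)   # (1, 528, 768)
```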
Fully supervised log anomaly detection methods require a large amount of labeled data to achieve promising performance, so how to alleviate the heavy burden of annotating massive unlabeled log data has received much attention. Recently, many semi-supervised log anomaly detection methods have been proposed to reduce annotation costs with the help of templates parsed from labeled normal data. However, these methods usually consider each keyword independently, which disregards the correlations among keywords within log events and the contextual relationships among log sequences. In this paper, we propose a novel weakly supervised log anomaly detection framework, named LogLG, to explore the semantic connections among keywords from sequences. Specifically, we design an iterative process in which the keywords of unlabeled logs are first extracted to construct a log-event graph at each iteration. Then, we build a subgraph annotator that recasts the problem of generating pseudo labels for unlabeled log sequences as annotating the corresponding log-subgraphs. To improve annotation quality, we adopt a self-supervised task to pre-train the subgraph annotator. After that, the log anomaly detection model is trained with the pseudo labels generated by the subgraph annotator. Conditioned on the classification results, we re-extract keywords from the classified log sequences and update the log-event graph for the next iteration. Experiments on five benchmarks validate the effectiveness of LogLG for detecting anomalies in unlabeled log data and demonstrate that LogLG, as a state-of-the-art weakly supervised method, achieves significant improvements over existing semi-supervised methods.
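A toy rendering of the first step as described: extract keywords from unlabeled log lines and build a co-occurrence graph over them (the tokenization and edge weighting below are our own simplifications):

```python
# Toy log-event graph construction: keywords that co-occur within one log
# event are linked, with edge weight counting how often they co-occur.
# LogLG's real keyword extraction and graph schema may differ.
from collections import defaultdict
from itertools import combinations

def build_log_event_graph(logs):
    edges = defaultdict(int)
    for line in logs:
        keywords = sorted({t.lower() for t in line.split() if t.isalpha()})
        for u, v in combinations(keywords, 2):  # co-occurrence within one event
            edges[(u, v)] += 1
    return edges

logs = [
    "Connection timeout on node 7",
    "Connection reset by peer",
    "Disk failure detected on node 3",
]
for (u, v), w in build_log_event_graph(logs).items():
    print(f"{u} -- {v} (weight {w})")
```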
Reward design is a fundamental problem in reinforcement learning (RL). A misspecified or poorly designed reward can result in low sample efficiency and undesired behaviors. In this paper, we propose the idea of \textit{programmatic reward design}, i.e., using programs to specify the reward functions in RL environments. Programs allow human engineers to express sub-goals and complex task scenarios in a structured and interpretable way. The challenge of programmatic reward design, however, is that while humans can provide the high-level structure, properly setting the low-level details, such as the right amount of reward for a particular sub-task, remains difficult. A major contribution of this paper is a probabilistic framework that can infer the best candidate programmatic reward function from expert demonstrations. Inspired by recent generative-adversarial approaches, our framework searches for the most likely programmatic reward function under which optimally generated trajectories cannot be distinguished from the demonstrated trajectories. Experimental results show that programmatic reward functions learned with this framework can significantly outperform those learned by existing reward learning algorithms, and enable RL agents to achieve state-of-the-art performance on highly complex tasks.
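A toy rendering of the idea: the engineer writes the structure of a reward program and leaves a low-level magnitude as a hole, and the hole is filled by choosing the value under which the expert demonstration looks most advantageous; the crude grid search below stands in for the paper's probabilistic, adversarial inference:

```python
# Hypothetical programmatic reward with one hole (the sub-goal bonus).
# The grid search picks the hole value that makes the expert trajectory
# score best relative to an alternative; the real framework infers this
# probabilistically against optimally generated trajectories.
def make_reward(subgoal_bonus: float):
    def reward(state, action):
        r = -1.0                           # step cost encourages finishing quickly
        if state == "at_key" and action == "pickup":
            r += subgoal_bonus             # the hole: how much is the sub-goal worth?
        if state == "at_door" and action == "open":
            r += 10.0                      # final goal, fixed by the engineer
        return r
    return reward

def trajectory_return(traj, reward):
    return sum(reward(s, a) for s, a in traj)

expert = [("start", "move"), ("at_key", "pickup"), ("at_door", "open")]
alternative = [("start", "move"), ("at_key", "move"), ("at_door", "open")]

best = max([0.0, 1.0, 5.0],
           key=lambda b: trajectory_return(expert, make_reward(b))
                       - trajectory_return(alternative, make_reward(b)))
print("inferred subgoal bonus:", best)
```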
Semantic segmentation is a key technology for autonomous vehicles to understand the surrounding scenes. The appealing performance of contemporary models usually comes at the expense of heavy computation and lengthy inference time, which is intolerable for self-driving. Using lightweight architectures (encoder-decoder or two-pathway) or reasoning on low-resolution images, recent methods achieve very fast scene parsing, even running at more than 100 FPS on a single 1080Ti GPU. However, there is still a significant performance gap between these real-time methods and models based on dilated backbones. To tackle this problem, we propose a family of efficient backbones specially designed for real-time semantic segmentation. The proposed deep dual-resolution networks (DDRNets) are composed of two deep branches between which multiple bilateral fusions are performed. Additionally, we design a new contextual information extractor named the Deep Aggregation Pyramid Pooling Module (DAPPM) to enlarge effective receptive fields and fuse multi-scale context based on low-resolution feature maps. Our method achieves a new state-of-the-art trade-off between accuracy and speed on both the Cityscapes and CamVid datasets. In particular, on a single 2080Ti GPU, DDRNet-23-slim yields 77.4% mIoU at 102 FPS on the Cityscapes test set and 74.7% mIoU at 230 FPS on the CamVid test set. With widely used test augmentation, our method is superior to most state-of-the-art models and requires much less computation. Code and trained models are available online.
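A schematic of one bilateral fusion stage between the two branches, assuming the high-resolution branch receives upsampled context while the low-resolution branch receives downsampled detail (channel sizes and ops are illustrative):

```python
# Schematic bilateral fusion between DDRNet's two branches. This conveys the
# two-way exchange pattern only; the paper's exact blocks may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BilateralFusion(nn.Module):
    def __init__(self, hi_ch: int = 64, lo_ch: int = 128):
        super().__init__()
        self.lo_to_hi = nn.Conv2d(lo_ch, hi_ch, 1)                       # compress context
        self.hi_to_lo = nn.Conv2d(hi_ch, lo_ch, 3, stride=2, padding=1)  # downsample detail

    def forward(self, hi, lo):
        hi_out = hi + F.interpolate(self.lo_to_hi(lo), size=hi.shape[-2:],
                                    mode="bilinear", align_corners=False)
        lo_out = lo + self.hi_to_lo(hi)
        return F.relu(hi_out), F.relu(lo_out)

hi = torch.randn(1, 64, 64, 128)    # high-resolution branch features
lo = torch.randn(1, 128, 32, 64)    # low-resolution branch features
a, b = BilateralFusion()(hi, lo)
print(a.shape, b.shape)
```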
We consider model-free reinforcement learning (RL) in non-stationary Markov decision processes. Both the reward functions and the state transition functions are allowed to vary arbitrarily over time as long as their cumulative variation does not exceed certain variation budgets. We propose Restarted Q-Learning with Upper Confidence Bounds (RestartQ-UCB), the first model-free algorithm for non-stationary RL, and show that it outperforms existing solutions in terms of dynamic regret. Specifically, RestartQ-UCB with Freedman-type bonus terms achieves a dynamic regret bound of $\widetilde{O}(S^{\frac{1}{3}} A^{\frac{1}{3}} \Delta^{\frac{1}{3}} H T^{\frac{2}{3}})$, where $S$ and $A$ are the numbers of states and actions, respectively, $\Delta > 0$ is the variation budget, $H$ is the number of time steps per episode, and $T$ is the total number of time steps. We further propose a parameter-free algorithm named Double-Restart Q-UCB that does not require prior knowledge of the variation budget. We show that our algorithms are \emph{nearly optimal} by establishing an information-theoretical lower bound of $\Omega(S^{\frac{1}{3}} A^{\frac{1}{3}} \Delta^{\frac{1}{3}} H^{\frac{2}{3}} T^{\frac{2}{3}})$, the first lower bound in non-stationary RL. Numerical experiments validate the advantages of RestartQ-UCB in terms of both cumulative rewards and computational efficiency. We demonstrate the power of our results in examples of multi-agent RL and inventory control with related products.
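A bare-bones sketch of the restart mechanism with Hoeffding-style optimism (the paper's tighter variant uses Freedman-type bonuses); the epoch length, bonus constant, and toy environment below are placeholder choices:

```python
# Sketch of restarted optimistic Q-learning: estimates are periodically
# discarded so the agent tracks a drifting environment. The toy rewards,
# restart schedule, and Hoeffding-style bonus are placeholders, not the
# paper's Freedman-type construction.
import numpy as np

def restartq_ucb(episodes=300, restart_every=100, S=4, A=2, H=5, c=1.0, seed=0):
    rng = np.random.default_rng(seed)
    for ep in range(episodes):
        if ep % restart_every == 0:                 # restart: forget stale estimates
            Q = np.full((H, S, A), H, dtype=float)  # optimistic initialization
            N = np.zeros((H, S, A))
        s = 0
        for h in range(H):
            a = int(np.argmax(Q[h, s]))
            r = rng.random() * (1 + a)              # toy stochastic reward
            s_next = int(rng.integers(S))
            N[h, s, a] += 1
            alpha = (H + 1) / (H + N[h, s, a])      # learning rate from the literature
            bonus = c * np.sqrt(H**3 / N[h, s, a])  # optimism term (Hoeffding-style)
            v_next = Q[h + 1, s_next].max() if h + 1 < H else 0.0
            Q[h, s, a] += alpha * (r + v_next + bonus - Q[h, s, a])
            s = s_next
    return Q

print(restartq_ucb()[0, 0])
```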
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point-cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding the 3D points into multi-modal features. The core design of CMT is quite simple, while its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
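A schematic of the cross-modal idea: image and point-cloud tokens concatenated into one memory sequence that object queries attend to (the use of nn.Transformer modules and all dimensions are our stand-ins for CMT's actual architecture):

```python
# Schematic cross-modal fusion: image patch tokens and point-cloud tokens
# form one token sequence, consumed by a standard transformer decoder with
# object queries. Dimensions, layer counts, and the 10 box parameters are
# illustrative placeholders, not CMT's published design.
import torch
import torch.nn as nn

dim, n_queries = 256, 100
img_tokens = torch.randn(2, 1400, dim)    # e.g., flattened multi-view image features
pts_tokens = torch.randn(2, 800, dim)     # e.g., voxelized point-cloud features
queries = torch.randn(2, n_queries, dim)  # learnable object queries (here random)

decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True),
    num_layers=2,
)
memory = torch.cat([img_tokens, pts_tokens], dim=1)  # implicit cross-modal fusion
out = decoder(queries, memory)
box_head = nn.Linear(dim, 10)             # e.g., center, size, yaw, velocity, score
print(box_head(out).shape)                # (2, 100, 10) raw box parameters
```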
Knowledge graphs (KG) have served as a key component of various natural language processing applications. Commonsense knowledge graphs (CKG) are a special type of KG, where entities and relations are composed of free-form text. However, previous works on KG completion and CKG completion suffer from long-tail relations and newly added relations, which do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which requires the combined strengths of graph representation learning and few-shot learning, has been proposed to tackle the problem of limited annotated data. In this paper, we comprehensively survey previous attempts at such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KGs and the methods. Finally, we present applications of FKGC models to prediction tasks in different areas and share our thoughts on future research directions for FKGC.